
    Covariate Adjustment in Bayesian Adaptive Clinical Trials

    Full text link
    In conventional randomized controlled trials, adjustment for baseline values of covariates known to be at least moderately associated with the outcome increases the power of the trial. Recent work has shown particular benefit for more flexible frequentist designs, such as information-adaptive and adaptive multi-arm designs. However, covariate adjustment has not been characterized within the more flexible Bayesian adaptive designs, despite their growing popularity. We focus on a subclass of these designs that allow for early stopping at an interim analysis given evidence of treatment superiority. We consider both collapsible and non-collapsible estimands, and show how to obtain posterior samples of marginal estimands from adjusted analyses. We describe several estimands for three common outcome types. We perform a simulation study to assess the impact of covariate adjustment using a variety of adjustment models in several different scenarios. This is followed by a real-world application of the compared approaches to a COVID-19 trial with a binary endpoint. For all scenarios, it is shown that covariate adjustment increases power and the probability of stopping the trials early, and decreases the expected sample sizes as compared to unadjusted analyses. Comment: 17 pages, 5 tables, 4 figures
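    As a rough illustrative sketch only (not the paper's implementation): the snippet below simulates interim data for a binary endpoint, fits a covariate-adjusted Bayesian logistic model with PyMC, standardizes over the observed covariate distribution to obtain posterior draws of a marginal risk difference, and applies a posterior-probability threshold as the superiority stopping rule. The simulated data, priors, and the 0.99 threshold are assumptions, not values from the paper.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

def expit(z):
    return 1 / (1 + np.exp(-z))

# Simulated interim data (assumed): binary endpoint, one prognostic baseline covariate.
n = 200
x = rng.normal(size=n)                   # baseline covariate
t = rng.integers(0, 2, size=n)           # 1:1 randomized treatment indicator
y = rng.binomial(1, expit(-0.5 + 0.8 * t + 1.0 * x))

# Covariate-adjusted Bayesian logistic regression (conditional model).
with pm.Model():
    a = pm.Normal("a", 0.0, 2.5)
    b_t = pm.Normal("b_t", 0.0, 2.5)
    b_x = pm.Normal("b_x", 0.0, 2.5)
    pm.Bernoulli("y", logit_p=a + b_t * t + b_x * x, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False, random_seed=0)

post = idata.posterior
draws = zip(post["a"].values.ravel(),
            post["b_t"].values.ravel(),
            post["b_x"].values.ravel())

# Marginal (standardized) risk difference: for each posterior draw, average the
# predicted risks over the observed covariate distribution under t=1 and t=0.
rd = np.array([expit(ai + bi + ci * x).mean() - expit(ai + ci * x).mean()
               for ai, bi, ci in draws])

# Interim superiority rule: stop early if Pr(marginal RD > 0) exceeds a threshold.
prob_superior = (rd > 0).mean()
print(f"Pr(RD > 0) = {prob_superior:.3f}; stop for superiority: {prob_superior > 0.99}")
```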

    Double robust estimation of optimal partially adaptive treatment strategies: an application to breast cancer treatment using hormonal therapy

    No full text
    Precision medicine aims to tailor treatment decisions according to patients' characteristics. G-estimation and dynamic weighted ordinary least squares (dWOLS) are double robust methods to identify optimal adaptive treatment strategies. It is underappreciated that they require modeling all existing treatment-confounder interactions to be consistent. Identifying optimal partially adaptive treatment strategies that tailor treatments according to only a few covariates, ignoring some interactions, may be preferable in practice. Building on G-estimation and dWOLS, we propose estimators of such partially adaptive strategies and demonstrate their double robustness. We investigate these estimators in a simulation study. Using data maintained by the Centre des Maladies du Sein, we estimate a partially adaptive treatment strategy for tailoring hormonal therapy use in breast cancer patients. R software implementing our estimators is provided.
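    As an illustration only (the paper provides its own R software; this is a generic Python stand-in with invented data): a dWOLS-style single-stage fit in which the blip model tailors on one covariate, x1, while deliberately ignoring the treatment-by-x2 interaction, i.e. a partially adaptive rule.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated single-stage data (assumed setup): two covariates, binary treatment.
n = 1000
x1 = rng.normal(size=n)          # tailoring covariate kept in the blip model
x2 = rng.normal(size=n)          # covariate deliberately excluded from tailoring
a = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * x1 - 0.2 * x2))))
y = x1 + x2 + a * (0.5 + 1.0 * x1 - 0.4 * x2) + rng.normal(size=n)

# 1) Propensity model for treatment, used to form dWOLS-style weights |A - pi(X)|.
exog_ps = sm.add_constant(np.column_stack([x1, x2]))
ps_fit = sm.Logit(a, exog_ps).fit(disp=0)
w = np.abs(a - ps_fit.predict(exog_ps))

# 2) Weighted OLS with a treatment-free part in (x1, x2) but a blip model in x1 only,
#    i.e. a partially adaptive rule that ignores the A*x2 interaction.
design = sm.add_constant(np.column_stack([x1, x2, a, a * x1]))
fit = sm.WLS(y, design, weights=w).fit()
psi0, psi1 = fit.params[3], fit.params[4]   # blip parameters (intercept, x1 slope)

# Estimated partially adaptive rule: treat when psi0 + psi1 * x1 > 0.
print(f"Recommend treatment when {psi0:.2f} + {psi1:.2f} * x1 > 0")
```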

    Variable Selection for Individualized Treatment Rules with Discrete Outcomes

    Full text link
    An individualized treatment rule (ITR) is a decision rule that aims to improve individual patients' health outcomes by recommending optimal treatments according to patients' specific information. In observational studies, collected data may contain many variables that are irrelevant for making treatment decisions. Including all available variables in the statistical model for the ITR could yield a loss of efficiency and an unnecessarily complicated treatment rule, which is difficult for physicians to interpret or implement. Thus, a data-driven approach to select important tailoring variables with the aim of improving the estimated decision rules is crucial. While there is a growing body of literature on selecting variables in ITRs with continuous outcomes, relatively few methods exist for discrete outcomes, which pose additional computational challenges even in the absence of variable selection. In this paper, we propose a variable selection method for ITRs with discrete outcomes. We show theoretically and empirically that our approach has the double robustness property, and that it compares favorably with other competing approaches. We illustrate the proposed method on data from a study of an adaptive web-based stress management tool to identify which variables are relevant for tailoring treatment.
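    A simplified stand-in, not the doubly robust selection method proposed in the paper: an L1-penalized logistic outcome model with treatment-by-covariate interactions, where non-zero interaction coefficients flag candidate tailoring variables. Data and tuning constants are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Simulated data (assumed setup): binary outcome, randomized binary treatment,
# five candidate covariates of which only x0 truly modifies the treatment effect.
n, p = 1500, 5
X = rng.normal(size=(n, p))
A = rng.binomial(1, 0.5, size=n)
lin = -0.3 + 0.5 * X[:, 0] + A * (1.2 * X[:, 0])   # only x0 tailors treatment
Y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Design with main effects, treatment, and treatment-by-covariate interactions;
# the L1 penalty shrinks irrelevant interaction coefficients to exactly zero.
design = np.column_stack([X, A, X * A[:, None]])
fit = LogisticRegression(penalty="l1", solver="saga", C=0.3, max_iter=5000).fit(design, Y)

# Interaction coefficients sit after the p main effects and the treatment column.
interaction_coefs = fit.coef_[0][p + 1:]
selected = [f"x{j}" for j, c in enumerate(interaction_coefs) if abs(c) > 1e-8]
print("Selected tailoring variables:", selected)
```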

    Characterizing patterns in police stops by race in Minneapolis from 2016-2021

    Full text link
    The murder of George Floyd centered Minneapolis, Minnesota, in conversations on racial injustice in the US. We leverage open data from the Minneapolis Police Department to analyze individual, geographic, and temporal patterns in more than 170,000 police stops since 2016. We evaluate person and vehicle searches at the individual level by race using generalized estimating equations with neighborhood clustering, directly addressing neighborhood differences in police activity. Minneapolis exhibits clear patterns of disproportionate policing by race, wherein Black people are searched at higher rates compared to White people. Temporal visualizations indicate that police stops declined following the murder of George Floyd. This analysis provides contemporary evidence on the state of policing for a major metropolitan area in the United States.
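    A schematic sketch with simulated records (the Minneapolis open data have a richer schema than assumed here): a GEE logistic model of the search indicator on a race indicator with exchangeable correlation within neighborhoods, mirroring the clustered analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated stop-level records (assumed structure, not the MPD data schema):
# a binary search indicator, a race indicator, and a neighborhood cluster id.
n = 5000
df = pd.DataFrame({
    "neighborhood": rng.integers(0, 80, size=n),
    "black": rng.binomial(1, 0.4, size=n),
})
logit = -1.5 + 0.6 * df["black"] + 0.2 * rng.normal(size=n)
df["searched"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE logistic regression with exchangeable correlation within neighborhoods.
model = smf.gee(
    "searched ~ black",
    groups="neighborhood",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print("Odds ratio, Black vs White:", np.exp(result.params["black"]))
```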